Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces. In cases where the transition dynamics can be readily evaluated at specified states (e.g., via a simulator), agents can operate in what is often referred to as planning with a \emph{generative model}. We propose the AE-LSVI algorithm for best-policy identification, a novel variant of the kernelized least-squares value iteration (LSVI) algorithm that combines optimism with pessimism for active exploration (AE). AE-LSVI provably identifies a near-optimal policy \emph{uniformly} over an entire state space and achieves polynomial sample complexity guarantees that are independent of the number of states. When specialized to the recently introduced offline contextual Bayesian optimization setting, our algorithm achieves improved sample complexity bounds. Experimentally, we demonstrate that AE-LSVI outperforms other RL algorithms in a variety of environments when robustness to the initial state is required.
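As a rough illustration of the optimism-plus-pessimism idea behind AE-LSVI, the sketch below picks the state where optimistic and pessimistic value estimates disagree the most; the function names and the acquisition rule itself are simplifying assumptions, not the paper's exact algorithm.

```python
import numpy as np

def select_query_state(states, q_upper, q_lower):
    """Pick the state where optimistic and pessimistic value estimates
    disagree the most (a hypothetical acquisition rule illustrating the
    optimism/pessimism combination, not the exact AE-LSVI update).

    q_upper, q_lower: callables mapping a state to per-action upper and
    lower confidence bounds on the Q-function.
    """
    gaps = []
    for s in states:
        v_opt = np.max(q_upper(s))   # optimistic value estimate at s
        v_pes = np.max(q_lower(s))   # pessimistic value estimate at s
        gaps.append(v_opt - v_pes)   # remaining uncertainty about V(s)
    return states[int(np.argmax(gaps))]
```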
We propose a principled way to define Gaussian process priors on various sets of unweighted graphs: directed or undirected, with or without loops. We endow each of these sets with a geometric structure, inducing the notions of closeness and symmetries, by turning them into a vertex set of an appropriate metagraph. Building on this, we describe the class of priors that respect this structure and are analogous to the Euclidean isotropic processes, like squared exponential or Mat\'ern. We propose an efficient computational technique for the ostensibly intractable problem of evaluating these priors' kernels, making such Gaussian processes usable within the usual toolboxes and downstream applications. We go further to consider sets of equivalence classes of unweighted graphs and define the appropriate versions of priors thereon. We prove a hardness result, showing that in this case, exact kernel computation cannot be performed efficiently. However, we propose a simple Monte Carlo approximation for handling moderately sized cases. Inspired by applications in chemistry, we illustrate the proposed techniques on a real molecular property prediction task in the small data regime.
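For intuition about what a Matérn-like kernel over graph-structured indices looks like, here is a generic sketch based on a Laplacian eigendecomposition; it shows a standard graph Matérn construction on a single graph, not the paper's metagraph-of-graphs kernels, and all names are illustrative.

```python
import numpy as np

def graph_matern_kernel(laplacian, kappa=1.0, nu=2.0):
    """Generic Matern-style kernel on a graph with Laplacian L,
    K = (2*nu/kappa**2 * I + L)^(-nu), evaluated via eigendecomposition.
    A standard construction shown only for intuition; the paper's kernels
    live on a metagraph whose vertices are graphs.
    """
    evals, evecs = np.linalg.eigh(laplacian)
    spectrum = (2.0 * nu / kappa**2 + evals) ** (-nu)
    K = evecs @ np.diag(spectrum) @ evecs.T
    return K / K.diagonal().mean()   # crude normalization of the variance
```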
Existing generalization bounds fail to explain crucial factors that drive generalization of modern neural networks. Since such bounds often hold uniformly over all parameters, they suffer from over-parametrization, and fail to account for the strong inductive bias of initialization and stochastic gradient descent. As an alternative, we propose a novel optimal transport interpretation of the generalization problem. This allows us to derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space. Therefore, our bounds are agnostic to the parametrization of the model and work well when the number of training samples is much smaller than the number of parameters. With small modifications, our approach yields accelerated rates for data on low-dimensional manifolds, and guarantees under distribution shifts. We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
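Since the bounds hinge on the local Lipschitz regularity of the learned predictor in data space, a crude way to probe this quantity empirically is a Monte Carlo estimate like the sketch below; the estimator and its parameters are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def local_lipschitz_estimate(f, x, radius=0.1, n_probe=32, rng=None):
    """Crude Monte Carlo estimate of the local Lipschitz constant of a
    prediction function f around a data point x: probe random directions
    on a small sphere and take the largest change ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    ratios = []
    for _ in range(n_probe):
        delta = rng.normal(size=x.shape)
        delta *= radius / np.linalg.norm(delta)        # point on the sphere of given radius
        ratios.append(np.linalg.norm(f(x + delta) - fx) / radius)
    return max(ratios)
```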
In robotics, optimizing controller parameters under safety constraints is an important challenge. Safe Bayesian optimization (BO) quantifies uncertainty in the objective and constraints to safely guide exploration in such settings. Hand-designing a suitable probabilistic model can be challenging, however. In the presence of unknown safety constraints, it is crucial to choose reliable model hyper-parameters to avoid safety violations. Here, we propose a data-driven approach to this problem by meta-learning priors for safe BO from offline data. We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity. As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner via empirical uncertainty metrics and a frontier search algorithm. On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches while maintaining safety.
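To make the role of the meta-learned prior concrete, the following sketch shows the standard safe-BO step of keeping only candidates whose constraint is non-negative with high confidence under the GP model; the interface (constraint_mean, constraint_std) and the "constraint >= 0 means safe" convention are assumptions.

```python
import numpy as np

def safe_candidates(candidates, constraint_mean, constraint_std, beta=2.0):
    """Keep only inputs whose constraint value is non-negative with high
    confidence under the (meta-learned) GP model.

    candidates: array of shape (n, d); constraint_mean / constraint_std:
    callables returning posterior mean and standard deviation arrays.
    """
    mu = constraint_mean(candidates)      # posterior mean of the constraint
    sigma = constraint_std(candidates)    # posterior standard deviation
    lcb = mu - beta * sigma               # pessimistic constraint estimate
    return candidates[lcb >= 0.0]         # high-confidence safe set
```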
Online optimization of noisy functions, where evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other settings. Often, the constraints on safe inputs are unknown, and we only obtain noisy observations indicating how close we are to violating them. Yet safety must be guaranteed at all times, not only for the final output of the algorithm. We introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. We provide a complete convergence analysis for non-convex, convex, and strongly convex smooth constrained problems with first-order and zeroth-order feedback. Compared to existing methods, our approach yields more efficient updates and scales better with dimensionality. We empirically compare the sample complexity and computational cost of our method with existing safe learning approaches. Beyond synthetic benchmarks, we also demonstrate the effectiveness of our approach for minimizing constraint violations in policy search tasks in safe reinforcement learning (RL).
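Below is a minimal sketch of the log-barrier SGD idea, assuming constraints of the form g_i(x) < 0 and access to (stochastic) gradients; the step-size rule here is only a heuristic placeholder for LB-SGD's carefully chosen adaptive step size.

```python
import numpy as np

def lb_sgd_step(x, grad_f, constraints, grad_constraints, eta=0.1, gamma=0.5):
    """One simplified log-barrier SGD step on
    B(x) = f(x) - eta * sum_i log(-g_i(x)),
    with the step size shrunk so the iterate stays strictly feasible
    (g_i(x) < 0). LB-SGD's actual adaptive step-size rule is more refined.
    """
    g_vals = np.array([g(x) for g in constraints])   # negative when safe
    grad_barrier = grad_f(x) - eta * sum(
        dg(x) / g_val for g_val, dg in zip(g_vals, grad_constraints)
    )
    # heuristic step size: smaller when close to the constraint boundary
    step = gamma * min(1.0, np.min(-g_vals) / (np.linalg.norm(grad_barrier) + 1e-12))
    return x - step * grad_barrier
```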
Inverse reinforcement learning (IRL) is a powerful paradigm for inferring a reward function from expert demonstrations. Many IRL algorithms require a known transition model, sometimes even a known expert policy, or at least access to a generative model. However, these assumptions are too strong for many real-world applications, where the environment can only be accessed through sequential interaction. We propose a novel IRL algorithm: active exploration for inverse reinforcement learning (AceIRL), which actively explores the unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy. AceIRL uses previous observations to construct confidence intervals that capture plausible reward functions and finds exploration policies that focus on the most informative regions of the environment. AceIRL is the first active IRL approach with sample-complexity bounds that does not require a generative model of the environment. In the worst case, AceIRL matches the sample complexity of active IRL with a generative model. In addition, we establish a problem-dependent bound that relates the sample complexity of AceIRL to the suboptimality gap of a given IRL problem. We empirically evaluate AceIRL in simulations and find that it significantly outperforms more naive exploration strategies.
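To illustrate the "explore where the reward is most uncertain" principle in a tabular setting, the sketch below plans a finite-horizon policy against the width of the reward confidence interval; AceIRL's actual exploration objective is more involved, and the tensor shapes and names are assumptions.

```python
import numpy as np

def exploration_policy(P, reward_upper, reward_lower, horizon):
    """Finite-horizon value iteration against the width of the reward
    confidence interval, so the resulting policy seeks out (state, action)
    pairs whose reward is still uncertain.

    P: transition tensor of shape (S, A, S);
    reward_upper, reward_lower: arrays of shape (S, A).
    """
    S, A, _ = P.shape
    bonus = reward_upper - reward_lower          # reward uncertainty per (s, a)
    V = np.zeros(S)
    policy = np.zeros((horizon, S), dtype=int)
    for h in reversed(range(horizon)):
        Q = bonus + P @ V                        # (S, A) backup
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy
```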
We consider the bandit optimization problem with a reward function defined over graph-structured data. This problem has important applications in molecular design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains and to graphs with many nodes. We tackle these challenges by embedding permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it lies in the reproducing kernel Hilbert space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the maximum information gain of the GNTK, for which we also provide a bound. While the reward function depends on all $n$ node features, our guarantees are independent of the number of graph nodes $n$. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.
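As a simplified picture of how a GNN-based confidence bound could drive selection, the sketch below greedily picks the graph with the largest upper confidence bound; the paper's algorithm instead uses phased elimination, and mean_fn / std_fn are hypothetical stand-ins for the GNN reward estimate and the GNTK-derived uncertainty.

```python
import numpy as np

def gnn_ucb_select(graphs, mean_fn, std_fn, beta=2.0):
    """Greedy upper-confidence-bound pick over a finite set of graphs.
    mean_fn / std_fn stand in for a GNN-based reward estimate and a
    GNTK-derived uncertainty; the paper's algorithm uses phased
    elimination rather than this greedy rule.
    """
    ucb = np.array([mean_fn(g) + beta * std_fn(g) for g in graphs])
    return int(np.argmax(ucb))   # index of the graph to query next
```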
Time series are ubiquitous and therefore inherently hard to analyze and, ultimately, to label or cluster. With the rise of the Internet of Things (IoT) and its smart devices, data is collected in large amounts. The collected data is rich in information, since one can detect accidents in real time (e.g., cars) or assess injury/sickness over a given time period (e.g., health devices). Due to their chaotic nature and the sheer number of data points, time series are hard to label manually. Moreover, new classes may emerge over time (in contrast to, e.g., handwritten digits), which would require relabeling the data. In this paper, we present SUSL4TS, a deep generative Gaussian mixture model for semi-unsupervised learning, to classify time series data. With our approach, we can alleviate the manual labeling step, since we can detect sparsely labeled classes (semi-supervised) and identify emerging classes hidden in the data (unsupervised). We demonstrate the efficacy of our approach with established time series classification datasets from different domains.
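A shallow, toy analogue of the semi-unsupervised idea (not the paper's deep generative model): fit a Gaussian mixture with more components than known classes, attach components to labels where labeled points dominate, and flag the rest as candidate emerging classes. All function names and the -1 convention for unlabeled points are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def semi_unsupervised_clusters(X, y, n_known, n_extra=2, seed=0):
    """Fit a Gaussian mixture with more components than known classes,
    map components to known labels where labeled points dominate, and
    flag unmatched components as candidate emerging classes.

    y: integer labels with -1 marking unlabeled points.
    """
    gmm = GaussianMixture(n_components=n_known + n_extra, random_state=seed).fit(X)
    components = gmm.predict(X)
    mapping = {}
    for c in range(n_known + n_extra):
        labels = y[(components == c) & (y >= 0)]
        mapping[c] = int(np.bincount(labels).argmax()) if labels.size else "emerging"
    return components, mapping
```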
Ensuring safety is a crucial challenge when deploying reinforcement learning (RL) to real-world systems. We develop confidence-based safety filters, a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard RL techniques, based on probabilistic dynamics models. Our approach is based on a reformulation of state constraints in terms of cost functions, which reduces safety verification to a standard RL task. By exploiting the concept of hallucinated inputs, we extend this formulation to determine a "backup" policy that is safe for the unknown system with high probability. Finally, the nominal policy is minimally adjusted at every time step during roll-outs with the backup policy, such that safe recovery can be guaranteed afterwards. We provide formal safety guarantees and empirically demonstrate the effectiveness of our approach.
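Here is a minimal sketch of the filtering step, assuming access to a probabilistic certificate safety_prob(state, action) for whether the backup policy can keep the system safe from the resulting state; the blending rule and threshold are illustrative assumptions, not the paper's exact adjustment.

```python
import numpy as np

def safety_filter(state, nominal_action, backup_action, safety_prob, threshold=0.95):
    """Apply the nominal action only if a probabilistic certificate says
    the backup policy stays safe with high probability from the resulting
    state; otherwise move toward the backup action as little as possible.
    """
    if safety_prob(state, nominal_action) >= threshold:
        return nominal_action                          # certified safe as-is
    for alpha in np.linspace(0.1, 1.0, 10):            # smallest certified adjustment
        blended = (1 - alpha) * nominal_action + alpha * backup_action
        if safety_prob(state, blended) >= threshold:
            return blended
    return backup_action                               # fall back to the backup policy
```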
A key challenge in science and engineering is to design experiments in order to learn about some unknown quantity of interest. Classical experimental design optimally allocates the experimental budget to maximize a notion of utility (e.g., reduction in uncertainty about the unknown quantity). We consider a rich setting in which the experiments are associated with states in a {\em Markov chain}, and we can only choose them by selecting a {\em policy} controlling the state transitions. This problem captures important applications, ranging from exploration in reinforcement learning to spatial monitoring tasks. We propose an algorithm, \textsc{markov-design}, that efficiently selects policies whose measurement allocation \emph{provably converges to the optimal one}. The algorithm is sequential in nature and adapts its choice of policies (experiments) based on past measurements. In addition to our theoretical analysis, we showcase our framework in applications to ecological monitoring and pharmacology.
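To convey the flavor of sequential design over policies, the sketch below greedily picks the candidate policy whose expected state-visitation distribution most increases a log-determinant design objective; the actual \textsc{markov-design} update carries convergence guarantees that this greedy heuristic does not, and all names and shapes are assumptions.

```python
import numpy as np

def next_policy(visitation_dists, features, counts, reg=1e-3):
    """Pick the candidate policy whose expected measurements most increase
    the log-determinant of the information matrix built from past counts.

    visitation_dists: (n_policies, S) expected state-visitation frequencies;
    features: (S, d) measurement feature per state; counts: (S,) past visits.
    """
    d = features.shape[1]
    info = features.T @ np.diag(counts.astype(float)) @ features + reg * np.eye(d)
    base = np.linalg.slogdet(info)[1]
    gains = []
    for w in visitation_dists:
        candidate = info + features.T @ np.diag(w) @ features
        gains.append(np.linalg.slogdet(candidate)[1] - base)
    return int(np.argmax(gains))   # index of the most informative policy
```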